Pharmacoepidemiology and Drug Safety
Wiley
Preprints posted in the last 30 days, ranked by how well they match Pharmacoepidemiology and Drug Safety's content profile, based on 13 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.
Koh, H. J. W.; Trin, C.; Ademi, Z.; Zomer, E.; Berkovic, D.; Cataldo Miranda, P.; Gibson, B.; Bell, J. S.; Ilomaki, J.; Liew, D.; Reid, C.; Lybrand, S.; Gasevic, D.; Earnest, A.; Talic, S.
Background: Non-adherence to lipid-lowering therapy (LLT) affects up to half of patients and contributes substantially to preventable cardiovascular morbidity and mortality. Existing measures, such as the proportion of days covered, provide cross-sectional summaries but fail to capture the dynamic patterns of adherence over time. Although group-based trajectory modelling identifies distinct longitudinal adherence patterns, no approach currently predicts trajectory membership prospectively while incorporating patient-reported barriers. We developed BRIDGE, a barrier-informed Bayesian model to predict adherence trajectories and identify their underlying drivers. Methods: BRIDGE incorporates patient-reported barriers as structured prior information within a Bayesian framework for adherence-trajectory prediction. The model was designed not only to estimate which patients are likely to follow different adherence trajectories, but also to generate clinically interpretable probability estimates that help explain why those trajectories may arise and what modifiable factors may be most relevant for intervention. Results: BRIDGE achieved a macro AUROC of 0.809 (95% CI 0.806 to 0.813), comparable to random forest (0.815, 95% CI 0.812 to 0.819) and XGBoost (0.821, 95% CI 0.818 to 0.824), two widely used machine-learning benchmarks for structured clinical prediction. Calibration was superior to random forest (Brier score 0.530 vs 0.545), and performance was stable across six independent training runs (AUROC SD = 0.003). Incorporating barrier-informed priors improved accuracy by 3.5% and calibration by 5.5% compared to flat priors, showing that patient-reported barriers added value beyond electronic medical record data alone.
Four clinically distinct adherence trajectories were identified: gradual decline associated with treatment deprioritisation amid polypharmacy (10.4%), early discontinuation linked to asymptomatic risk dismissal (40.5%), rapid decline associated with intolerance (28.8%), and persistent adherence (20.2%). Counterfactual analysis identified trajectory-specific intervention levers. Conclusions: BRIDGE provides accurate and well-calibrated prediction of adherence trajectories while offering clinically actionable insights into their underlying drivers. By integrating patient-reported barriers with routine clinical data, the model supports targeted, mechanism-informed interventions at the point of prescribing to improve adherence to cardioprotective therapies. Funding: MRFF CVD Mission Grant 2017451. Evidence before this study: We searched PubMed and Scopus from database inception to December 2025 using the terms "medication adherence", "trajectory", "prediction model", "Bayesian", "lipid-lowering therapy", and "barriers", with no language restrictions. Group-based trajectory modelling has consistently identified three to five adherence patterns across cardiovascular cohorts; however, these applications have been descriptive rather than predictive. Machine-learning models for adherence prediction achieve moderate discrimination but treat adherence as a binary or continuous outcome, thereby overlooking the clinically meaningful heterogeneity captured by trajectory approaches. One prior study applied a Bayesian dynamic linear model to examine adherence-outcome associations, but it did not predict adherence trajectories or incorporate patient-reported barriers. To our knowledge, no published model integrates patient-reported barriers into trajectory prediction. Added value of this study: BRIDGE is, to our knowledge, the first model to incorporate patient-reported adherence barriers as hierarchical domain-informed priors within a Bayesian framework for trajectory prediction.
Using 108 predictors derived from routine electronic medical records, the model achieves discrimination comparable to state-of-the-art machine-learning approaches while additionally providing uncertainty quantification, barrier-level interpretability, and counterfactual insights to inform intervention strategies. The identified trajectories differed not only in adherence level but also in switching behaviour, drug-class evolution, and medication burden, suggesting distinct underlying mechanisms of non-adherence that may require tailored clinical responses. Implications of all the available evidence: Each adherence trajectory implies a distinct intervention target: asymptomatic risk communication for early discontinuers (40.5% of patients), proactive tolerability management for rapid decliners, medication simplification for patients with gradual decline associated with polypharmacy, and maintenance support for persistent adherers. By integrating routinely collected clinical data with patient-reported barriers, BRIDGE can be deployed within existing primary care EMR infrastructure to generate actionable, trajectory- and patient-specific recommendations at the point of prescribing, helping to bridge the gap between adherence measurement and targeted adherence management.
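For readers comparing the reported Brier scores across models: the abstract does not state the exact multiclass formulation used, but a common definition (mean squared distance between the predicted class-probability vector and the one-hot outcome) can be sketched as follows; the function name and toy numbers are ours, not the authors':

```python
import numpy as np

def multiclass_brier(probs, labels, n_classes):
    """Multiclass Brier score: mean squared distance between each
    predicted probability vector and the one-hot true label.
    Lower is better; 0 is a perfect, fully confident model."""
    probs = np.asarray(probs, dtype=float)
    onehot = np.eye(n_classes)[np.asarray(labels)]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# Toy predictions over the four trajectory classes from the abstract
probs = [[0.7, 0.1, 0.1, 0.1],
         [0.2, 0.5, 0.2, 0.1]]
labels = [0, 1]
score = multiclass_brier(probs, labels, 4)
```

Under this convention a four-class score near 0.53-0.55, as reported, reflects substantial residual uncertainty across classes.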
Reisberg, S.; Oja, M.; Mooses, K.; Tamm, S.; Sild, A.; Talvik, H.-A.; Laur, S.; Kolde, R.; Vilo, J.
Background: The increasing availability of routinely collected health data offers new opportunities for population-level research, yet access to comprehensive, linked, and standardised datasets remains limited. We describe EST-Health-30, a large-scale, population-representative health data resource from Estonia. Methods: EST-Health-30 comprises a random 30% sample of the Estonian population (~500,000 individuals), with longitudinal data from 2012 to 2024 and annual updates planned through 2026. Individual-level records are linked across five nationwide databases, including electronic health records, health insurance claims, prescription data, cancer registry, and cause of death records. A privacy-preserving hashing approach ensures consistent cohort inclusion over time while maintaining pseudonymisation. All data are harmonised to the Observational Medical Outcomes Partnership (OMOP) Common Data Model (version 5.4) using international standard vocabularies. Data quality was assessed using established OMOP-based validation frameworks. Results: The dataset contains rich multimodal information on diagnoses, procedures, laboratory measurements, prescriptions, free-text clinical notes, healthcare utilisation, and costs, with high population coverage and longitudinal depth. Data quality assessment showed high completeness and consistency, with 99.2% of applicable checks passing. The age-sex distribution closely reflects the national population, supporting representativeness, though coverage is marginally below the target 30% (29.2%), primarily attributable to recent immigrants without health system contact. The dataset enables construction of detailed clinical cohorts, analysis of disease trajectories, and evaluation of healthcare utilisation and outcomes across the life course. Conclusions: EST-Health-30 is a comprehensive, standardised, and population-representative real-world data resource that supports epidemiological, clinical, and methodological research. 
Its alignment with the OMOP CDM facilitates reproducible analytics and participation in international federated research networks, while secure access infrastructure ensures compliance with data protection regulations.
Patil, P.; Durvasula, R.; Patel, S.; Malik, M.; Patil, S.
Importance: Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) and dual glucose-dependent insulinotropic polypeptide/glucagon-like peptide-1 receptor agonists have demonstrated what may be considered transformative efficacy in recent randomized clinical trials for the treatment of obesity, yielding substantial weight loss in a majority of participants. However, the extent to which these trial results translate into routine clinical practice, particularly within the rapidly expanding direct-to-consumer (DTC) telehealth sector serving self-pay populations, remains insufficiently characterized. As access to and affordability of these therapies broaden beyond traditional insurance-based care models, evaluating real-world effectiveness, safety, and patient engagement among individuals shouldering the full financial cost of treatment is essential for informing future models of obesity care delivery. Objective: To assess long-term, medication-specific weight loss outcomes, including gender-specific responses and discrepancies, and to explore usage trends in a real-world, self-pay telehealth cohort receiving GLP-1 RA therapy, using an observational study design (retrospective data analysis). Setting and Participants: Retrospective data of patients enrolled in electronic health records (EHR) from Carevalidate, a national US telehealth platform provider for online telehealth companies. The data covered a total of 703 days, from January 12, 2024, to December 15, 2025. The analysis included 572 adults with an overweight or obesity diagnosis who initiated treatment with semaglutide or tirzepatide and completed a minimum of 9 months of active follow-up. Patients with insufficient follow-up or those utilizing insurance coverage were excluded to isolate the self-pay phenotype.
Exposures: Prescription of semaglutide or tirzepatide (injectable or oral formulations) via synchronous or asynchronous telehealth consultations, titrated according to standard clinical protocols adapted for patient tolerance and financial sustainability. Main Outcomes and Measures: The primary outcome was percentage total body weight loss (%TBWL) from baseline to the last recorded encounter. Secondary outcomes included categorical responder rates (5%, 10%, 15%, and >20% weight loss), weight loss velocity analysis, and telehealth utilization metrics (frequency of encounters and visit intervals), including gender differences in engagement with the telehealth program. Results: The final analytical cohort included 572 patients (79.2% female; 20.8% male). Overall, 95.8% (548/572) achieved weight loss, while 3.7% experienced weight gain. At 12 months, the mean %TBWL was 13.8% for the semaglutide cohort (n=450) and 12.5% for the tirzepatide cohort (n=122), with no statistically significant difference between the two medications (P>.05), contrary to standard clinical trial data suggesting tirzepatide superiority. A significant gender difference was observed: females comprised 80% of the cohort and were more likely to be "major responders" (>20% weight loss) than males (29.8% vs 5.9%; P<.001). Conversely, males demonstrated significantly higher utilization rates, attending more frequent encounters (mean 13.5 vs 12.7; P=.028) with shorter intervals between visits (35.6 vs 44.1 days; P=.009) than females. Weight loss velocity for both medications peaked during months 1 to 3 (~1.07 lbs/week) and declined substantially by months 12 to 15, indicating a plateau effect independent of the specific agent used. Conclusions and Relevance: Telehealth-managed GLP-1 treatment in a self-pay population demonstrates efficacy comparable to clinical trials for semaglutide.
However, tirzepatide outcomes fell short of trial benchmarks, likely due to economic barriers preventing optimal dose titration and a smaller sample size. The study identifies a discrepancy: females enter the telehealth-based self-pay system in greater numbers, but males engage more frequently with the digital platform, which could reflect their inferior physiological outcomes (less weight loss and more non-responders) compared to females. This suggests that while telehealth is a viable model for long-term obesity care, a "one size fits all" approach may be insufficient for under-responders, who may require distinct titration strategies or tailored behavioral interventions to overcome baseline genetic and biological resistance.
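The primary outcome and responder bands above are simple to compute; a minimal sketch follows, with function names, band boundaries (the abstract lists 5/10/15/>20% without specifying inclusive vs exclusive cut-offs), and the example patient all being our illustrative assumptions:

```python
def pct_tbwl(baseline, current):
    """Percentage total body weight loss (%TBWL) from baseline."""
    return 100.0 * (baseline - current) / baseline

def responder_band(tbwl):
    """Categorical responder bands; boundary handling is assumed."""
    if tbwl > 20:
        return ">20%"
    if tbwl >= 15:
        return "15%"
    if tbwl >= 10:
        return "10%"
    if tbwl >= 5:
        return "5%"
    return "non-responder"

# Hypothetical patient: 220 lb at baseline, 185 lb at last encounter
loss = pct_tbwl(220, 185)
band = responder_band(loss)
```

A patient losing 35 lb from a 220 lb baseline falls just inside the 15% responder band.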
Ding, X.; Vadini, V.; Kim, C.; Bu, F.; Chen, H. Y.; Chai, Y.; Duarte-Salles, T.; Hsu, J. C.; Khera, R.; Lau, W. C. Y.; Man, K. K. C.; Nagy, P.; Ostropolets, A.; Pistillo, A.; Pratt, N.; Roel, E.; Seager, S.; Van Zandt, M.; Yuan, L.; Hripcsak, G.; Mathioudakis, N.; Suchard, M. A.; Nishimura, A.
Importance Women have been under-represented in clinical trials of type 2 diabetes mellitus (T2D), and evidence on sex differences in effectiveness of T2D treatments remains limited. Objective To assess sex differences in comparative effectiveness and safety of four second-line antidiabetic agents: glucagon-like peptide-1 receptor agonists (GLP-1RA), sodium-glucose cotransporter-2 inhibitors (SGLT2i), dipeptidyl peptidase-4 inhibitors (DPP4i), and sulfonylureas (SU). Design Retrospective cohort study using an active-comparator new-user design, following each participant until treatment discontinuation or end of data. Setting Multinational study across ten real-world databases from the Observational Health Data Sciences and Informatics (OHDSI) network in the United States, United Kingdom, Germany, and Spain. Participants 5.15 million adults with T2D who initiated one of the four second-line therapies following metformin during 1992-2021. Exposures GLP-1RA, SGLT2i, DPP4i, or SU. Main Outcomes and Measures Cardiovascular effectiveness as measured through 7 outcomes (major adverse cardiovascular events and glycemic control) and safety through 18 outcomes highlighted by the ADA guideline. Hazard ratios (HRs) were estimated separately for women and men using propensity score-stratified Cox models with empirical calibration. Sex differences were tested using Z-tests on log-HR differences. Results Drug initiation rates differed by sex, with 9.28% of women initiating on GLP-1RA, 11.91% SGLT2i, 27.81% DPP4i, and 50.99% SU; the corresponding rates among men were 5.41%, 12.84%, 24.64%, and 57.10%. No significant sex differences were observed for cardiovascular effectiveness outcomes. Several safety outcomes showed significant sex differences that were consistent across drug comparisons.
Focusing on GLP-1RA compared to SGLT2i for brevity, GLP-1RA users experienced the following comparative benefits and risks: a higher risk of acute pancreatitis among women (HR 1.39 [1.13, 1.70]) but a non-differential risk among men (HR 0.91 [0.74, 1.12]), with p = 0.005 for the test of difference; and a non-differential risk of hypotension among women (HR 1.08 [0.98, 1.19]) but a lower risk among men (HR 0.87 [0.78, 0.96]), with p = 0.003. Where no sex differences were found, our findings were consistent with existing evidence. Conclusions and Relevance This large-scale multinational study of antidiabetic agents identified clinically relevant sex differences that are biologically plausible but previously lacked clinical evidence. Our findings reinforce the importance of tailoring T2D management according to sex.
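The Z-test on log-HR differences used here can be reproduced from the published HRs and 95% CIs alone, by backing out each standard error from the CI width. A minimal sketch (function name ours), applied to the acute pancreatitis result above:

```python
import math

def z_test_loghr_difference(hr_f, ci_f, hr_m, ci_m):
    """Two-sided Z-test on the difference of log hazard ratios.
    Standard errors are recovered from the 95% CIs:
    SE = (ln(upper) - ln(lower)) / (2 * 1.96)."""
    se_f = (math.log(ci_f[1]) - math.log(ci_f[0])) / (2 * 1.96)
    se_m = (math.log(ci_m[1]) - math.log(ci_m[0])) / (2 * 1.96)
    z = (math.log(hr_f) - math.log(hr_m)) / math.sqrt(se_f**2 + se_m**2)
    # Two-sided p-value from the standard normal CDF (via erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Acute pancreatitis: women HR 1.39 (1.13-1.70), men HR 0.91 (0.74-1.12)
z, p = z_test_loghr_difference(1.39, (1.13, 1.70), 0.91, (0.74, 1.12))
```

The recovered p-value is close to the reported p = 0.005 (small differences reflect rounding of the published HRs and CIs).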
Smith, M.; Dixon, S.; Ziyenga, S.; Hirst, J. A.; Bankhead, C. R.; Nicholson, B. D.
Hormone replacement therapy (HRT) with oestrogen and progestogen is a common medical treatment for alleviating symptoms of menopause. Since 2015, its use has been increasing in the UK. Unscheduled bleeding can be a symptom of endometrial cancer, and guidelines state that women experiencing this should have an urgent referral for suspected endometrial cancer. However, unscheduled bleeding is also common in women taking HRT, particularly in the first few months after starting HRT or if there is a change in regimen. Current guidelines may result in women on HRT receiving referrals that are not necessary and undergoing unpleasant and invasive tests such as hysteroscopy. However, there is a lack of current information to guide recommendations. This protocol describes a cohort study in the ORCHID-e database of anonymised patient records from English primary care. We will use a cohort of women aged over 40 years starting on HRT with oestrogen and progestogen, age-matched to women who have not started HRT. Exposure will be a prescription for oestrogen-containing HRT with no prescription for oestrogen-containing HRT in the previous year. The index date in each matched set will be the date of this prescription. Prescriptions for progestogen-containing drugs will not be used to define the exposure, but this information will be extracted to describe the study population and for sensitivity analyses. Outcomes will be consultations for unscheduled bleeding, urgent referrals for suspected endometrial cancer, and diagnosis of endometrial cancer. Women will be followed up until they change exposure status or are otherwise censored. Women who start taking HRT during follow-up will re-enter the cohort in the exposed group.
We will describe proportions of women with a code for consulting with unscheduled bleeding, proportions of those women referred for further investigation on the pathway for suspected endometrial cancer, and proportions diagnosed with endometrial cancer within one year of referral. We will investigate the diagnostic accuracy of unscheduled bleeding for endometrial cancer separately for women on HRT and those not on HRT. Analyses will be done by 6-month categories of time since index, age, calendar year, sociodemographic variables, risk factors for endometrial cancer, and type of HRT.
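The diagnostic-accuracy analysis planned above reduces, within each stratum, to standard 2x2 metrics. A minimal sketch with entirely made-up counts (the protocol reports no data yet):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value of a
    symptom (here, unscheduled bleeding) as a marker of a diagnosis
    (endometrial cancer), from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # cancers flagged by the symptom
    specificity = tn / (tn + fp)   # non-cancers without the symptom
    ppv = tp / (tp + fp)           # symptomatic women with cancer
    return sensitivity, specificity, ppv

# Illustrative (hypothetical) counts for one HRT stratum
sens, spec, ppv = diagnostic_accuracy(tp=8, fp=92, fn=2, tn=898)
```

In a stratum where bleeding is common but cancer is rare, sensitivity can be high while PPV stays low, which is exactly the trade-off motivating HRT-specific referral thresholds.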
Liu, Y.; Levinson, S. L.; Kowalik, E.; Pronchik, J.; Kobzik, L.; DiNubile, M. J.
Background Plasma gelsolin (pGSN) is a non-immunosuppressive anti-inflammatory immunomodulator with demonstrated efficacy in animal models of acute lung injury. Its potential role in moderate-to-severe acute respiratory distress syndrome (ARDS) is currently under investigation. Methods We conducted a phase 1, randomized, double-blind, placebo-controlled study to evaluate the safety, tolerability, and pharmacokinetics of recombinant human pGSN (rhu-pGSN) following intravenous (IV) administration to healthy volunteers. Thirty-two participants were assigned to 4 sequentially ascending dose cohorts (6, 12, 18, 24 mg/kg of body weight) to receive five IV infusions of rhu-pGSN or saline placebo. Each cohort included 8 subjects randomized 3:1 to rhu-pGSN or placebo. Doses were administered at 0 hours, 12 hours, 36 hours, 60 hours, and 84 hours. The primary outcome was the incidence and severity of clinical and laboratory adverse events (AEs) regardless of causality. Secondary outcomes included the pharmacokinetics of IV rhu-pGSN and the presence of anti-rhu-pGSN antibodies at Day 28. Results Overall, 10 subjects (41.7%) who received rhu-pGSN reported a total of 13 AEs, and 1 subject (12.5%) who received placebo reported an AE. All AEs were mild or moderate. AEs in system organ classes reported by 2 or more subjects in either arm were skin and subcutaneous tissue disorders (12.5% rhu-pGSN; 0% placebo), gastrointestinal disorders (8.3% rhu-pGSN; 0% placebo), and nervous system disorders (12.5% rhu-pGSN; 12.5% placebo). No AE by preferred term was reported by more than 1 subject in either arm. Three subjects (12.5%) experienced an AE assessed as related to study drug. No serious AEs occurred, and no AEs led to study discontinuation, dose interruption/reduction, or death. There were no apparent between-treatment differences in laboratory abnormalities, vital signs, or electrocardiogram findings.
Conclusions Overall, in this study, IV rhu-pGSN (up to 24 mg/kg daily) appeared safe and well tolerated compared to placebo. The median half-life of rhu-pGSN exceeded 14 h across all dosing regimens, supporting once-daily IV dosing in healthy subjects. Trial registration This study was registered with ClinicalTrials.gov on 2023-03-29 under the registration identifier NCT05789745.
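The link between the reported half-life and the dosing-interval conclusion follows from first-order elimination kinetics. A minimal sketch of how a terminal half-life is derived from two concentration measurements (function name and numbers are illustrative, not trial data):

```python
import math

def terminal_half_life(c1, c2, dt_hours):
    """Elimination half-life from two concentrations measured
    dt_hours apart, assuming first-order (log-linear) decay."""
    k = math.log(c1 / c2) / dt_hours  # elimination rate constant (1/h)
    return math.log(2) / k

# If concentration halves (100 -> 50 units) over 14 h, the half-life
# is 14 h, so less than ~75% of drug is cleared between daily doses.
t_half = terminal_half_life(100, 50, 14)
```

A half-life above 14 h means trough levels after 24 h remain above a quarter of peak, which is the kinetic basis for the once-daily dosing claim.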
Dasgupta, N.; Sibley, A. L.; Gildner, P.; Gora Combs, K.; Post, L. A.; Tobias, S.; Kral, A. H.; Pacula, R. L.
Drug overdose deaths in the United States reached record levels during the fentanyl era before recently declining. A plausible hypothesis is that a sudden drop in fentanyl purity beginning in 2023 caused the downturn in overdose mortality. We evaluated this hypothesis by replicating a published analysis with regional overdose data, using models that account for time trends and autocorrelation, and negative control indicators to test for spurious correlation. When fentanyl purity was rising, the national purity series did not track overdose increases in most regions and showed only a modest association in the West. When both purity and mortality later declined, the observed associations were also seen with unrelated macroeconomic indicators that shared the same time pattern. National fentanyl purity alone does not provide a sufficient explanation for recent overdose declines.
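The methodological point here (two series that share a time trend correlate spuriously in levels, and detrending or negative controls expose this) can be illustrated with a small simulation; the series, seed, and numbers below are ours, not the paper's data:

```python
import numpy as np

def diff_corr(x, y):
    """Pearson correlation of first-differenced series: a simple
    guard against spurious correlation driven by shared trends."""
    dx, dy = np.diff(x), np.diff(y)
    return float(np.corrcoef(dx, dy)[0, 1])

# Two unrelated monthly series sharing a downward trend correlate
# strongly in levels but not after differencing.
rng = np.random.default_rng(0)
t = np.arange(60)
purity = 80 - 0.5 * t + rng.normal(0, 1, 60)          # hypothetical
deaths = 110000 - 700 * t + rng.normal(0, 1000, 60)   # hypothetical
level_r = float(np.corrcoef(purity, deaths)[0, 1])    # near 1
diff_r = diff_corr(purity, deaths)                    # near 0
```

The same logic motivates the paper's negative controls: if an unrelated macroeconomic series "explains" the decline equally well, the level correlation is a trend artifact.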
Huang, L.; Xu, X.; Matsushita, K.; Brady, T. M.; Appel, L. J.; Hoorn, E. J.; Tian, M.; Aminde, L. N.; Trieu, K.; Neal, B.; Marklund, M.
ABSTRACT Objective To estimate the benefit and risk of replacing regular salt with potassium-enriched salt. Design Comparative risk assessment modelling. Setting Worldwide. Participants Adult populations aged 25 and above. Intervention (1) worldwide replacement of all salt (discretionary salt used for seasoning or cooking in the home, and non-discretionary salt used in processed and restaurant foods); (2) worldwide replacement of just discretionary salt; (3) worldwide replacement of just non-discretionary salt; (4) replacement of discretionary salt just for people with diagnosed hypertension; and (5) replacement of discretionary salt just for people with treated hypertension. Main outcome measures For scenarios 1-3, we estimated benefits, including deaths, new cases, and disability-adjusted life years (DALYs) from cardiovascular disease (CVD) and chronic kidney disease (CKD) averted through blood pressure lowering, as well as harms (CVD deaths) caused by hyperkalaemia among people with CKD stages G3-G5. Results Replacement of all salt worldwide could prevent 2.96 (95% uncertainty interval 2.81-3.12) million deaths, 10.17 (9.59-10.70) million new cases of disease, and 69.43 (65.61-72.92) million DALYs each year. These figures represent 14.6%, 13.1%, and 16.5% of the annual global disease burden attributable to CVD and CKD. Replacement of all discretionary salt (1.85, 1.74-1.97 million deaths) would have a greater impact on mortality than replacement of all non-discretionary salt (1.56, 1.46-1.67 million deaths). In people with CKD stages G3-G5, there would be a net benefit: replacement of all salt would prevent 0.75 (0.71-0.80) million deaths but might cause 0.10 (0.09-0.11) million deaths from hyperkalaemia. Discretionary salt replacement only among diagnosed or treated hypertensives would prevent 0.59 (0.55-0.63) million and 0.48 (0.45-0.52) million deaths, respectively.
Conclusion Switching from regular salt to potassium-enriched salt appears to offer large potential for health gains under diverse scenarios, including for people with CKD.
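Two headline quantities in the abstract can be checked by simple arithmetic on the published figures (all values in millions of deaths per year, taken directly from the abstract):

```python
# Headline figures from the abstract (millions of deaths per year)
averted_all_salt = 2.96       # full worldwide replacement
share_of_burden_pct = 14.6    # stated share of annual CVD+CKD deaths

# Implied total annual CVD+CKD death burden (millions)
implied_total = averted_all_salt / (share_of_burden_pct / 100.0)

# Net benefit among CKD stages G3-G5: deaths prevented by blood
# pressure lowering minus deaths caused by hyperkalaemia
net_ckd_benefit = 0.75 - 0.10
```

The stated percentages thus imply a total CVD+CKD burden of roughly 20 million deaths per year, and a net CKD benefit of about 0.65 million deaths averted.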
Mogeni, P.; Ochieng, J. B.; Kariuki, K.; Rwigi, D.; Atlas, H. E.; Tickell, K. D.; Aluoch, L. R.; Sonye, C.; Apondi, E.; Ambila, L.; Diakhate, M. M.; Singa, B. O.; Liu, J.; Platts-Mills, J. A.; Saidi, Q.; Denno, D. M.; Fang, F. C.; Walson, J. L.; Houpt, E. R.; Pavlinac, P. B.
Background: The Toto Bora trial tested whether a course of azithromycin reduced rates of re-hospitalization or death in the 6 months following hospitalization among Kenyan children. We hypothesized that azithromycin would reduce enteric bacteria and increase carriage of macrolide resistance in the subsequent 3 months. Methods: Kenyan children (1-59 months) hospitalized and subsequently discharged for non-traumatic conditions provided fecal samples before and 3 months after randomization to a 5-day course of azithromycin or placebo. Quantitative PCR identified enteropathogens and AMR-conferring genes in fecal samples. Generalized estimating equations assessed the impact of the randomization arm on pathogen and resistance gene detection, accounting for baseline presence and site. Results: Among 1,393 baseline stools, 12.4% had at least one bacterial enteropathogen, 94.7% had at least one macrolide-resistance gene, and 92.6% had at least one beta-lactamase-resistance gene identified. At month 3, children randomized to azithromycin had a 6.1% higher likelihood of carrying a macrolide resistance gene compared to placebo (adjusted prevalence ratio [aPR], 1.06; 95% CI, 1.04-1.08; P<0.001). Specifically, azithromycin randomization was associated with a higher relative prevalence of erm(B) (aPR, 1.09 [95% CI, 1.04-1.15]; P=0.001), erm(C) (aPR, 1.23 [95% CI, 1.14-1.31]; P<0.001), msr(A) (aPR, 1.14 [95% CI, 1.04-1.25]; P=0.007), and msr(D) (aPR, 1.07 [95% CI, 1.03-1.11]; P=0.001). There was no difference in overall bacterial pathogen prevalence (18.9% vs 17.3%) between randomization arms, but a slightly lower proportion of children had Shigella after randomization in the azithromycin arm (3% vs. 5%; aPR, 0.79 [95% CI, 0.62-1.01]; P=0.063).
Interpretation: Azithromycin at hospital discharge was associated with higher carriage of macrolide-resistance-conferring genes in the post-discharge period compared with placebo, without significant declines in enteric pathogen carriage other than modest changes in Shigella. The potential benefits and risks of empiric azithromycin need to be considered, as children are increasingly exposed to this broad-spectrum antibiotic.
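The aPRs above come from GEE models, but the underlying quantity is a prevalence ratio. A minimal sketch of its unadjusted form with a log-scale Wald CI; the counts are illustrative, not the trial's raw data:

```python
import math

def prevalence_ratio(a, n1, b, n0):
    """Unadjusted prevalence ratio with a 95% Wald CI on the log
    scale. (The published aPRs additionally adjust for baseline
    carriage and site via generalized estimating equations.)"""
    p1, p0 = a / n1, b / n0
    pr = p1 / p0
    # SE of log(PR) for binomial proportions
    se = math.sqrt((1 - p1) / a + (1 - p0) / b)
    return (pr,
            math.exp(math.log(pr) - 1.96 * se),
            math.exp(math.log(pr) + 1.96 * se))

# Illustrative counts: 90/100 resistance-gene carriers in the
# azithromycin arm vs 80/100 in the placebo arm
pr, lo, hi = prevalence_ratio(90, 100, 80, 100)
```

With carriage this common, even a small absolute difference yields a tight ratio near 1, which is why the trial's CIs around aPR 1.06 are so narrow.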
Salim, A.; Allen, M.; Mariki, K.; Pallangyo, T.; Maina, R.; Mzee, F.; Minja, M.; Msovela, K.; Liana, J.
In the context of global health, the ability of frontline primary health providers to identify potential Drug-Drug Interactions (DDIs) is a critical component of patient safety. This is particularly true in settings like Tanzania, where drug dispensers often serve as the primary point of contact for healthcare. In this study, we establish a baseline for drug decision-making capabilities across multiple cadres of healthcare providers in Kibaha, Tanzania. We specifically distinguish between the ability to recognize safe drug combinations versus harmful ones. The findings reveal a critical asymmetry in provider performance: while professional training improves the recognition of safe combinations, it provides no advantage over lay intuition (and in some cases, a significant disadvantage) in detecting potentially harmful interactions.
Hughes, N.; Hogenboom, J.; Carter, R.; Norman, L.; Gouthamchand, V.; Lindner, O.; Connearn, E.; Lobo Gomes, A.; Sikora-Koperska, A.; Rosinska, M.; Pogoda, K.; Wiechno, P.; Jagodzinska-Mucha, P.; Lugowska, I.; Hanebaum, S.; Dekker, A.; van der Graaf, W.; Husson, O.; Wee, L.; Feltbower, R.; Stark, D.
Background: Population-based cancer registers (PBCR) are important for monitoring trends in cancer epidemiology, facilitating the implementation of effective cancer services. Adolescents and Young Adults (AYA) with cancer are a patient group with a unique set of needs. The utility of PBCR in AYA is limited by the lack of AYA-specific data items. STRONG AYA, an international multidisciplinary consortium, is addressing this through federated learning (FL) methodology and novel data visualisation concepts. A Core Outcome Set (COS) has been developed to measure outcomes of importance through clinical data and Patient Reported Outcomes (PROs). We describe how data from the Yorkshire Specialist Register of Cancer in Children and Young People (YSRCCYP), a PBCR in the UK, are being used within STRONG AYA and how the subsequent analyses can guide patient consultations. Methods: Data from the YSRCCYP were imported into a Vantage 6 node, from which FL analyses are performed along with data provided by other consortium members. The results are extracted into the PROMPT software and integrated into patient electronic healthcare records. Results: Healthcare professionals can view the results of individual PROs at various time points and in comparison to summary analyses carried out within the STRONG AYA infrastructure. Results can be filtered by age, disease, country, and stage. Conclusion: We have demonstrated how a regional PBCR can contribute to a pan-European infrastructure and how the resulting analyses can be viewed to enhance patient consultations. Such analyses have the potential to be used for research and policy-making, improving outcomes for AYA.
Ytsma, C. R.; Torralbo, A.; Fitzpatrick, N. K.; Pietzner, M.; Louloudis, I.; Nguyen, D.; Ansarey, S.; Denaxas, S.
Objective The aim of this study was to develop and validate an automated, scalable framework to harmonise fragmented UK primary care prescription records into a research-ready dataset by mapping four diverse medical ontologies to a unified, historically comprehensive reference standard. Materials and Methods We used raw prescription records for consented participants in the UK Biobank, in which participants are uniquely characterized by multiple data modalities. Primary care data were preprocessed by selecting one drug code if multiple were recorded, cleaning codes to match reference presentations, expanding code granularity based on drug descriptions, and updating outdated codes to a single reference version. Harmonisation entailed mapping British National Formulary (BNF) and Read2 codes to dm+d, the universal NHS standard vocabulary for uniquely identifying and prescribing medicines. Harmonised dm+d records were then homogenised to a single concept granularity, the Virtual Medicinal Product (VMP). We validated our methods by creating medication profiles mapping contemporary drug prescribing patterns in 312 physical and mental health conditions. Results We preprocessed 57,659,844 records (100%) from 221,868 participants (100%). Of those, 48,950 records were dropped due to lack of a drug code. 7,357,572 records (13%) used multiple ontologies. Most records (76%) were encoded in BNF, and most had their code granularity expanded via the drug description (N=28,034,282; 49%). 41,244,315 records (72%) were harmonised to dm+d, and 99.98% of these were converted to VMP as a homogeneous dataset. Across 312 diseases, we identified 23,352 disease-drug associations with 237 medications (represented as BNF subparagraphs) that survived statistical correction, most of which resembled drug-indication pairs.
Conclusion Our methodology converts highly fragmented raw prescription records of inconsistent data quality into a streamlined, enriched dataset at a single reference version and granularity of information. Harmonised prescription records can then be readily used by researchers to perform large-scale analyses.
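At its core, the harmonisation step described above is a code-to-code lookup from each source ontology into dm+d. A minimal sketch of that step; the code pairs below are invented placeholders (real BNF/Read2-to-dm+d mappings come from NHS reference releases), and the function names are ours:

```python
# Hypothetical code maps; real mappings come from NHS reference data
BNF_TO_DMD = {"0212000B0AAABAB": "319997000"}    # placeholder pair
READ2_TO_DMD = {"bxd1.": "319997000"}            # placeholder pair

def harmonise(record):
    """Map a raw prescription record (ontology, code) to a dm+d
    concept identifier, mirroring the paper's BNF/Read2 -> dm+d
    harmonisation step. Returns None for unmapped codes (the
    residue the paper reports as not reaching dm+d)."""
    ontology, code = record
    table = {"BNF": BNF_TO_DMD, "READ2": READ2_TO_DMD}.get(ontology, {})
    return table.get(code)
```

Records from different source ontologies that resolve to the same dm+d concept become directly comparable, which is what enables the downstream homogenisation to VMP granularity.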
Green, J.; Fonseca, L. M.; Simon, S. S.; Schnaider Beeri, M.; Tafuto, B.; Byham-Gray, L. D.; Kaplan, J.
Background: Gabapentin prescriptions have increased 123% since 2010, reaching 59 million annually and 15.5 million patients. Recent evidence indicates that concomitant use of gabapentin and dihydropyridine calcium channel blockers (DHP-CCBs) amplifies dementia risk through a dual neuronal calcium signaling blockade mechanism. Whether these cognitive effects are reversible upon discontinuation, and whether the combination accelerates decline in patients with established dementia, remains unknown. Methods: We conducted two complementary studies using the Rutgers Clinical Research Data Warehouse (CRDW; 2015-2024). Study 1: A self-controlled case series (SCCS; N=3,058) comparing cognitive event rates during concomitant gabapentin-DHP-CCB use versus after discontinuation, using strictly duration-matched observation windows. Study 2: A cohort study (N=320) of patients with established dementia initiating gabapentin, comparing outcomes between DHP-CCB, non-DHP-CCB, and no-CCB users. Findings were externally replicated in the NIH All of Us Research Program Controlled Tier (N=8,853). Results: In the CRDW self-controlled analysis, event rates were significantly higher during combination use versus after discontinuation: falls (RR 1.34, 95% CI 1.11-1.61), cognitive symptoms (RR 1.67, 95% CI 1.38-2.01), and composite cognitive endpoint (RR 1.32, 95% CI 1.09-1.59). Effects were greatest when both drugs were discontinued (cognitive symptoms RR 2.21; falls RR 1.76). Protopathic bias was ruled out by monotonically increasing RRs across 0-, 30-, and 60-day lag conditions. In the dementia acceleration cohort, DHP-CCB use tripled encephalopathy risk (HR 3.18, 95% CI 1.36-7.46), with zero events among non-DHP CCB users. External replication in All of Us confirmed all primary outcomes (falls RR 1.53, cognitive symptoms RR 1.26, composite RR 1.42; all p<0.001). 
A non-DHP CCB negative control in All of Us confirmed mechanistic specificity: cognitive symptom and encephalopathy reversal signals were absent with verapamil/diltiazem. CKD amplified effects in both datasets, consistent with gabapentin accumulation through impaired renal clearance. Conclusions: Cognitive effects associated with concomitant gabapentin-DHP-CCB use appear substantially reversible upon discontinuation, replicated across two independent datasets. The DHP-specific pattern, confirmed through a pharmacological negative control, supports a neuronal L-type calcium channel mechanism. Clinicians should review gabapentin-DHP-CCB combinations in patients with cognitive complaints or falls, as deprescribing - particularly of both agents - may produce meaningful improvement.
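The rate ratios above come from a self-controlled design that compares event rates between duration-matched exposure windows. As a minimal sketch of how such a rate ratio and its Wald confidence interval are computed on the log scale (illustrative only; the function name and the example counts are hypothetical, not the study's data or code):

```python
import math

def rate_ratio_ci(events_exposed, time_exposed, events_ref, time_ref, z=1.96):
    """Rate ratio with a Wald confidence interval on the log scale.

    events_*: event counts in each window; time_*: person-time at risk.
    Assumes Poisson counts, so SE(log RR) ~ sqrt(1/a + 1/b).
    """
    rr = (events_exposed / time_exposed) / (events_ref / time_ref)
    se = math.sqrt(1 / events_exposed + 1 / events_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 134 falls over 1000 person-years on combination
# therapy vs 100 falls over 1000 person-years after discontinuation.
rr, lo, hi = rate_ratio_ci(134, 1000, 100, 1000)
```

With equal person-time the ratio reduces to the ratio of counts, which is why duration-matched windows simplify the comparison.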
Kazgan, M.
Background: Digital health platforms can improve clinical efficiency and patient outcomes, but adoption in routine care remains limited due to workflow and integration challenges. Rheumatoid arthritis (RA) management relies on consistent capture of patient-reported and clinical data, which is often time-intensive and inconsistently documented. Objective: To assess the impact of the cliexa-RA digital platform on patient experience, physician workflow, and cost-related outcomes using the Quadruple Aim framework. Methods: A six-month pilot study was conducted at the Colorado Arthritis Center involving 300 RA patients. Patients completed a 16-question intake (RAPID3-based), followed by clinician-entered joint assessments. The platform generated five disease activity scores (DAS28-ESR, DAS28-CRP, SDAI, CDAI, RAPID3) and produced EMR-compatible outputs. Time metrics, patient satisfaction, and workflow efficiency were evaluated. Results: Mean patient intake time was 2.4 minutes, a 52% reduction compared to paper-based processes. Clinician time for calculation and documentation decreased by 77%, with near real-time EMR integration. Overall patient satisfaction was high (3.55/4), with 85% recommending the platform. Physicians reported improved documentation efficiency and workflow integration. Administrative cost reductions were observed through decreased reporting burden and improved compliance with quality reporting requirements. Conclusions: The cliexa-RA platform significantly improved efficiency and user experience in RA management. These findings support the role of integrated digital tools in reducing administrative burden and enabling scalable, data-driven care, with potential downstream benefits for cost and population health.
Ales, M. W.; Larrison, C. D.; Rodrigues, S. B.
Background Between 2021 and 2022, primary care obesity management was entering the early diffusion phase of newer anti-obesity pharmacotherapy, as GLP-1-based treatments began reshaping expectations. However, it was unclear whether primary care clinicians and practice environments were prepared to deliver comprehensive obesity care. (1,2) Methods In 2021 to 2022, we surveyed 276 clinicians from three cohorts: an opt-in national physician panel (Cohort A), clinicians from an integrated health system (Cohort B), and clinicians from a rural accountable care organization (Cohort C). The survey, informed by formative patient and physician focus groups conducted in 2021, assessed current and desired competence, attitudes, confidence, perceived forces for change, and barriers to obesity care. Analyses were descriptive (means and standard deviations). Results Across cohorts, desired competence exceeded current competence. The largest gaps involved recommending behavioral interventions, developing comprehensive care plans, and providing ongoing obesity management support. Attitudes toward obesity care were generally favorable, while confidence that current practices reflected best practice was only moderate. Professional and personal forces for change were moderate, patient-driven motivators were moderate to high, whereas social (peer/organizational) reinforcement was weak. Reported barriers extended beyond knowledge deficits to include patient engagement, competing demands, cost, and practical constraints. Conclusions At the threshold of the GLP-1 era, primary care clinicians were motivated to improve obesity care but lacked consistent support to deliver comprehensive management. The relative absence of peer and organizational reinforcement suggests that readiness for change depends not only on individual knowledge and attitudes, but also on the social reinforcement available to support comprehensive obesity care in routine practice.
Fitzgerald, O.; Keller, E.; Illingworth, P.; Lieberman, D.; Peate, M.; Kotevski, D.; Paul, R.; Rodino, I.; Parle, A.; Hammarberg, K.; Copp, T.; Chambers, G. M.
Study question: What are the characteristics and treatment outcomes of women who undertook planned egg freezing (PEF) in Australia and New Zealand between 2009 and 2023? Summary answer: There has been an average yearly increase in the uptake of PEF of 35%, with most women undergoing a single PEF procedure in their mid-thirties. With at least ten years of follow-up, a little over one in four women returned, with nearly half of those using donor sperm and one-third achieving a live birth. What is known already: PEF, where women freeze their eggs as a strategy to preserve fertility, has increased dramatically in high-income countries in the last decade. Despite the rapid uptake of PEF, there remains limited information to guide women, clinicians and policy makers regarding the characteristics of women undertaking this procedure and treatment outcomes. Study design, size, duration: A retrospective population-based cohort study of all women who undertook PEF in Australia and New Zealand between 2009 and 2023, including their subsequent return to thaw their eggs and treatment outcomes. Where women returned to utilise their eggs, all subsequent embryo transfer procedures were linked, enabling calculation of live birth rates per woman. Participants/materials, setting, methods: 20,209 women who undertook PEF in Australia and New Zealand between 2009 and 2023, including 1,657 women who returned to thaw their eggs. Main results and the role of chance: There has been a dramatic increase in uptake of PEF, from 55 women in 2009 to 4,919 in 2023. Women who freeze their eggs are typically aged 34-38 years (interquartile range) and nulliparous (98.6%). For women with at least 10 years of follow-up (i.e. undertook PEF in 2009-13; N=514), 27.9% returned and thawed their frozen eggs (average time to return: 4.9 years). This reduced to 22.1% in those with at least 5 years of follow-up (i.e. undertook PEF in 2009-2018; N=4,288). Of those who used their frozen eggs, 47% used donor sperm.
After at least two years of follow-up, 33.9% had a live birth, rising over time to 37.8% for eggs thawed in 2019-2021. Limitations, reasons for caution: In the timeframe 2009-2019 we did not have information on whether egg freezing occurred because of a cancer diagnosis, a cohort we wished to exclude from the study. As a result, for this timeframe we weighted observations by the probability that egg freezing occurred due to cancer, with the prediction model developed on the years 2020-2023. Wider implications of the findings: This study provides recent and comprehensive data on PEF to guide prospective patients and clinicians and inform policy. The exponential growth in PEF in Australia and New Zealand mirrors trends in other high-income countries, suggesting a doubling time of 2-3 years. Study findings highlight the need for setting realistic expectations about the likelihood of returning to use frozen eggs and live birth rates. Study funding/competing interest(s): 2020-2025 MRFF Emerging Priorities and Consumer Driven Research initiative: EPCD000014
Trkulja, V.
Background. Recent meta-analyses of randomized controlled trials (RCTs) claimed efficacy of higher-dose fluvoxamine (2 x 100 mg/day, as opposed to 2 x 50 mg/day) in prevention of disease deterioration in adults with mild-to-moderate COVID-19. Objectives. To investigate whether such claims are supported by the data. Methods. Systematic review and meta-analysis of RCTs evaluating higher-dose fluvoxamine in this indication. Results. Seven studies declared as RCTs were identified, one of which was severely biased (open-label, with a non-standardized and unreported standard of care as a control) and eventually ended as non-randomized (massive attrition). Composite endpoints of deterioration in the 6 included placebo-controlled trials contained elements susceptible to error and bias. Three trials were small (<100 patients/arm), three were larger (270-750 patients/arm). Deaths and need for mechanical ventilation were sporadic and observed in only one trial. Hospitalizations were also sporadic in 5/6 trials. Frequentist methods generally appropriate for random-effects analysis of a small number of trials with rare outcomes (generalized linear mixed models, beta-binomial or binomial-normal) greatly underestimated heterogeneity, but still did not document benefits regarding the composite endpoints or hospitalizations. Bayesian hierarchical models revealed substantial heterogeneity and indicated no benefit regarding: (i) composites of deterioration, large trials OR = 0.78 (95% CrI 0.55-1.21); multiplicity corrected OR = 0.87 (0.64-1.21); (ii) hospitalizations, small trials OR = 0.88 (0.45-1.72); large trials OR = 0.94 (0.52-1.75); all trials OR = 0.81 (0.47-1.43). Heterogeneity was unlikely due to clinical particulars (vaccination status, treatment duration, time horizon), and more likely due to unidentified bias. Conclusions. RCTs do not support efficacy of higher-dose fluvoxamine in prevention of disease deterioration in adults with mild-to-moderate COVID-19.
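The abstract contrasts frequentist random-effects methods with Bayesian hierarchical models. As a point of reference, the classical frequentist baseline for this kind of pooling is DerSimonian-Laird estimation; a minimal sketch, assuming per-trial log odds ratios and standard errors are already in hand (illustrative only, not the author's analysis code):

```python
import math

def dersimonian_laird(log_or, se):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    log_or: per-trial log odds ratios; se: their standard errors.
    Returns (pooled log OR, its SE, tau^2 between-trial variance).
    """
    w = [1 / s**2 for s in se]                      # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_or))
    df = len(log_or) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # method-of-moments estimate
    w_star = [1 / (s**2 + tau2) for s in se]        # re-weight with tau^2
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_or)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star)), tau2
```

When trials are few and events rare, the moment-based tau^2 is often badly underestimated, which is exactly the failure mode the abstract describes and a motivation for the Bayesian hierarchical alternative.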
Murray, K. T.; Fabbri, D. V.; Annis, J. S.; Clark, C. R.; Pulley, J. M.; Brittain, E.; Gailani, D.
In the management of atrial fibrillation (AF), the most frequently prescribed oral anticoagulant is apixaban, given at a fixed dose of 5 mg BID. Apixaban is predominantly metabolized by cytochrome P450 3A4 (CYP3A4) and is also a substrate for the drug efflux transporter P-glycoprotein (P-gp). In nearly 300,000 Medicare patients with AF receiving apixaban, we previously showed that concomitant therapy with drugs that inhibit both CYP3A4 and P-gp, specifically amiodarone or diltiazem, significantly increased serious bleeding that caused hospitalization and/or death. We hypothesized that this adverse effect was mediated by an increase in apixaban plasma concentrations caused by concomitant therapy that reduced drug elimination. Utilizing left-over samples obtained from clinically indicated blood draws that would typically be discarded, the Vanderbilt University Medical Center biobank BioVU contains >353,000 samples linked to de-identified electronic medical records (EMRs), with both DNA and plasma harvested. Of 35 samples drawn from patients taking apixaban 5 mg BID, 5 were identified to be drawn from patients concomitantly taking drugs inhibiting both CYP3A4 and P-gp. Using a chromogenic anti-Xa assay, we found that plasma concentrations of apixaban were significantly higher (347 ± 64 ng/mL; mean ± SEM) for patients receiving concomitant CYP3A4/P-gp-inhibiting drugs compared to those not treated with these drugs (166 ± 67 ng/mL; P=0.025, Mann-Whitney). There were no differences between the 2 patient groups with respect to age, weight, or serum creatinine. The results of this pilot study provide preliminary data to support our hypothesis, and they demonstrate the practicality of obtaining pharmacokinetic data from a large cohort of plasma samples linked to de-identified EMRs. This approach could be used to define the role of apixaban levels in high-risk clinical scenarios and to better understand the relationship between drug levels and bleeding risk.
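The group comparison above used a Mann-Whitney test, appropriate for such small, skewed concentration samples. The U statistic itself is simple to compute by hand; a minimal sketch (the function and example values are hypothetical, not the study's code or data):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Counts the pairs (xi, yj) with xi > yj; ties contribute 0.5.
    U ranges from 0 to len(x) * len(y); the p-value would then come
    from the exact permutation distribution or a normal approximation.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical concentrations (ng/mL): inhibitor group vs no-inhibitor group.
u = mann_whitney_u([310, 355, 402], [150, 170, 190, 160])
```

In practice one would call a vetted implementation such as `scipy.stats.mannwhitneyu`, which also handles tie corrections in the p-value.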
Goncalves, B. P.; Franco, E. L.
Timeliness of therapy initiation is a fundamental determinant of outcomes for many medical conditions, most importantly cancer. Yet, existing inefficiencies in healthcare systems mean that delays between diagnosis and treatment frequently adversely affect the clinical outcome for cancer patients. Although estimates of the effects of lag time to therapy would be informative to policymakers considering resource allocation to minimize delays in oncology, causal methods are seldom explicitly discussed in epidemiologic analyses of these lag times. Here, we propose causal estimands for such studies, and outline the protocol of a target trial that could be emulated with observational data on lag times. To illustrate the application of this approach, we simulate studies of lag time to treatment under two scenarios: one in which indication bias (Waiting Time Paradox) is present and another in which it is absent. Although our discussion focuses on oncologic outcomes, components of the proposed target trial could be adapted to study delays for other medical conditions. We believe that the clarity with which causal questions are posed under the target trial emulation framework would lead to improved quantification of the effects of lag times in oncology, and hence to better informed policy decisions.
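The Waiting Time Paradox the abstract simulates arises because sicker patients are both treated sooner and more likely to die, so a short lag appears harmful and delay appears falsely protective. A toy simulation of this confounding structure (hypothetical function and probabilities, not the authors' simulation code) makes the mechanism concrete:

```python
import random

def simulate_lag_study(n=100_000, indication_bias=True, seed=1):
    """Toy lag-time study: returns mortality risk in short-lag vs long-lag groups.

    Severe patients always die more often; under indication bias they are
    also treated sooner, so the short-lag group is enriched with severe
    cases and delay looks protective even though lag has no causal effect.
    """
    random.seed(seed)
    deaths_short = n_short = deaths_long = n_long = 0
    for _ in range(n):
        severe = random.random() < 0.5
        if indication_bias:
            short_lag = random.random() < (0.8 if severe else 0.2)
        else:
            short_lag = random.random() < 0.5   # lag unrelated to severity
        death = random.random() < (0.3 if severe else 0.05)
        if short_lag:
            n_short += 1
            deaths_short += death
        else:
            n_long += 1
            deaths_long += death
    return deaths_short / n_short, deaths_long / n_long
```

With the bias switched on, the naive risk contrast favors longer lags; with it off, the two groups are exchangeable and the risks coincide, which is the contrast a target trial emulation is designed to recover.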
Gao, S.; Gao, J.; Miles, K.; Madan, J. C.; Pasternack, M.; Wald, E. R.; Gunther, S. H.; Frankovich, J.
Background Group A streptococcus (GAS) infections have been associated with neuropsychiatric disorders in epidemiologic studies and animal models, but data in US health care populations are limited. GAS is also associated with autoimmune sequelae, including acute rheumatic fever (ARF)/Sydenham chorea (SC), poststreptococcal reactive arthritis (PSRA), poststreptococcal glomerulonephritis (PSGN), and guttate psoriasis (GP). Epstein-Barr virus (EBV) has been linked to systemic lupus erythematosus (SLE) and multiple sclerosis (MS) and the complexity of these associations parallels that of GAS-associated conditions, providing a useful comparison. Objectives 1) Assess the association between a positive GAS test and incident neuropsychiatric diagnoses within 1 year in a large US health care database. 2) Assess the validity of the same database in detecting well-established disease associations while avoiding false associations. Design, Setting, Participants Retrospective cohort study using TriNetX data from US health care organizations. Patients with positive or negative tests were propensity score-matched (GAS cohort n=178,301; EBV cohort n=64,854). Patients with documented neuropsychiatric diagnoses prior to testing were excluded. To approximate a primary care population, inclusion required at least one well-visit. Exposures Positive vs negative GAS test; positive vs negative EBV test (separate cohorts). Main Outcomes and Validations Main outcome: incident neuropsychiatric diagnoses within 1 year of GAS testing. Positive control outcomes: ARF/SC, PSRA, PSGN, and GP (for GAS cohort); SLE and MS (for EBV cohort). Negative control outcomes: conditions without known association with GAS. Results After matching, a positive GAS test was associated with attention-deficit/hyperactivity disorder (ADHD) (RR: 1.09; 95% CI: 1.03-1.15). Among established poststreptococcal conditions, only GP was associated with prior GAS (RR: 1.75; 95% CI: 1.06-2.89). 
Case counts were insufficient to evaluate ARF/SC, PSRA, and PSGN. Negative control outcomes showed no association. In the EBV cohort, no association was observed with SLE, and MS showed a decreased risk. Conclusions and Relevance A positive GAS test was associated with ADHD but not with other neuropsychiatric disorders. The database detected poststreptococcal GP but did not identify most established postinfectious autoimmune associations, likely reflecting rarity, heterogeneity, and diagnostic complexity. These findings begin to delineate the capacity, and the limits, of real-world health care databases for evaluating postinfectious neuropsychiatric risk.
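The cohorts above were propensity score-matched before outcome comparison. A common matching step, once a score has been estimated, is 1:1 greedy nearest-neighbour matching within a caliper; a minimal sketch of that step (hypothetical function; TriNetX's actual matching procedure may differ):

```python
def greedy_match(treated_scores, control_scores, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on a scalar propensity score.

    Each treated unit takes the closest unused control within the caliper;
    treated units with no eligible control are dropped. Returns a list of
    (treated_index, control_index) pairs.
    """
    used = set()
    pairs = []
    for t, ts in enumerate(treated_scores):
        best, best_d = None, caliper
        for c, cs in enumerate(control_scores):
            d = abs(cs - ts)
            if c not in used and d <= best_d:
                best, best_d = c, d
        if best is not None:
            used.add(best)
            pairs.append((t, best))
    return pairs
```

Greedy matching is order-dependent, which is why production implementations often randomize treated order or use optimal matching instead; the caliper width (often a multiple of the score's standard deviation) governs the bias-variance trade-off.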